In this work, we present an end-to-end binaural speech synthesis system that combines a low-bitrate audio codec with a powerful binaural decoder capable of accurate speech binauralization while faithfully reconstructing environmental factors such as ambient noise or reverberation. The network is a modified vector-quantized variational autoencoder, trained with several carefully designed objectives, including an adversarial loss. We evaluate the proposed system on an in-house binaural dataset with objective metrics and a perceptual study. The results show that the proposed approach matches the ground-truth data more closely than previous methods. In particular, we demonstrate the capability of the adversarial loss in capturing the environmental effects needed to create realistic auditory scenes.
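The core of a vector-quantized variational autoencoder is the quantization step: each encoder output is snapped to its nearest codebook entry. The following is a minimal NumPy sketch of that lookup only, with illustrative codebook sizes, not the paper's actual architecture or training objectives:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each row of z to its closest codebook row (squared L2 distance)."""
    # pairwise squared distances: (num_vectors, num_codes)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d.argmin(axis=1)
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))                      # 8 codes, 4-dim latents
z = codebook[[2, 5]] + 0.01 * rng.normal(size=(2, 4))   # inputs near codes 2 and 5
zq, idx = vector_quantize(z, codebook)                  # quantized latents + indices
```

In a full VQ-VAE this lookup is made differentiable with a straight-through gradient estimator and paired with codebook and commitment losses.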
A hallmark of human intelligence is the ability to learn new concepts purely from language. Several recent approaches have explored training machine learning models via natural language supervision. However, these approaches fall short in leveraging linguistic quantifiers (such as 'always' or 'rarely') and mimicking humans in compositionally learning complex tasks. Here, we present LaSQuE, a method that can learn zero-shot classifiers from language explanations by using three new strategies - (1) modeling the semantics of linguistic quantifiers in explanations (including exploiting ordinal strength relationships, such as 'always' > 'likely'), (2) aggregating information from multiple explanations using an attention-based mechanism, and (3) model training via curriculum learning. With these strategies, LaSQuE outperforms prior work, showing an absolute gain of up to 7% in generalizing to unseen real-world classification tasks.
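The first of LaSQuE's strategies, exploiting ordinal strength relationships among quantifiers, can be illustrated with a toy lookup table. The specific strength values below are assumptions for illustration, not the paper's learned parameters; only the ordering ('always' > 'likely' > 'rarely') reflects the idea in the abstract:

```python
# Hypothetical quantifier-to-strength mapping; values are illustrative,
# but they respect the ordinal constraint 'always' > 'likely' > 'rarely'.
QUANTIFIER_STRENGTH = {
    "always": 0.95,
    "usually": 0.80,
    "likely": 0.70,
    "often": 0.60,
    "sometimes": 0.40,
    "rarely": 0.10,
}

def explanation_weight(quantifier):
    """Strength used to weight an explanation's feature; unknown words are neutral."""
    return QUANTIFIER_STRENGTH.get(quantifier, 0.5)
```

A classifier built this way can weight the feature named in "birds that rarely swim" far less than one named in "birds that always fly".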
Many visual recognition models are evaluated only on their classification accuracy, a metric for which they obtain strong performance. In this paper, we investigate whether computer vision models can also provide correct rationales for their predictions. We propose a ``doubly right'' object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales. We find that state-of-the-art visual models, such as CLIP, often provide incorrect rationales for their categorical predictions. However, by transferring the rationales from language models into visual representations through a tailored dataset, we show that we can learn a ``why prompt,'' which adapts large visual representations to produce correct rationales. Visualizations and empirical experiments show that our prompts significantly improve performance on doubly right object recognition, in addition to zero-shot transfer to unseen tasks and datasets.
Incidental supervision from language has become a popular approach for learning generic visual representations that can be prompted to perform many recognition tasks in computer vision. We conduct an in-depth exploration of the CLIP model and show that its visual representation is often strongly biased towards solving some tasks more than others. Moreover, which task the representation will be biased towards is unpredictable, with little consistency across images. To resolve this task bias, we show how to learn a visual prompt that guides the representation towards features relevant to the task of interest. Our results show that these visual prompts can be independent of the input image and still effectively provide a conditioning mechanism to steer visual representations towards the desired task.
Autonomous driving requires efficient reasoning about the location and appearance of the different agents in the scene, which aids in downstream tasks such as object detection, object tracking, and path planning. The past few years have witnessed a surge in approaches that combine the different task-based modules of the classic self-driving stack into an End-to-End (E2E) trainable learning system. These approaches replace perception, prediction, and sensor fusion modules with a single contiguous module with a shared latent space embedding, from which one extracts a human-interpretable representation of the scene. One of the most popular representations is the Bird's-eye View (BEV), which expresses the location of different traffic participants in the ego vehicle frame from a top-down view. However, a BEV does not capture the chromatic appearance information of the participants. To overcome this limitation, we propose a novel representation that captures the appearance and occupancy information of various traffic participants from an array of monocular cameras covering a 360 deg field of view (FOV). We use a learned image embedding of all camera images to generate a BEV of the scene at any instant that captures both the appearance and occupancy of the scene, which can aid in downstream tasks such as object tracking and executing language-based commands. We test the efficacy of our approach on a synthetic dataset generated from CARLA. The code, data set, and results can be found at https://rebrand.ly/APP OCC-results.
Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages beyond interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline.
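The scoring change at the heart of classification by description can be sketched in a few lines: average image-descriptor similarities per category instead of taking a single image-to-category-name similarity. The toy embeddings and descriptor choices below are invented for illustration; in practice the embeddings would come from a VLM such as CLIP and the descriptors from a language model such as GPT-3:

```python
import numpy as np

def classify_by_description(image_emb, descriptors):
    """descriptors: {category: [descriptor embeddings]} -> (best category, scores)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {
        cat: sum(cos(image_emb, d) for d in embs) / len(embs)
        for cat, embs in descriptors.items()
    }
    return max(scores, key=scores.get), scores

# Toy example: the image embedding aligns with the "tiger" descriptors.
image = np.array([1.0, 0.9, 0.1])
descriptors = {
    "tiger": [np.array([1.0, 1.0, 0.0]),   # e.g. "stripes"
              np.array([0.9, 0.8, 0.2])],  # e.g. "claws"
    "zebra": [np.array([0.0, 0.2, 1.0]),
              np.array([0.1, 0.0, 0.9])],
}
pred, scores = classify_by_description(image, descriptors)
```

Because the per-descriptor similarities are exposed, one can inspect which features drove the decision, or edit the descriptor list to adjust the criteria.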
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete in developing conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to assist users with increasingly complex tasks, new conversational AI techniques and evaluation platforms are needed. The Alexa Prize TaskBot Challenge, established in 2021, builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, while making use of both voice and visual modalities. This challenge requires TaskBots to identify and understand the user's needs, identify and integrate task and domain knowledge, and develop new ways of engaging the user without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to the teams with the CoBot Toolkit, and summarizes the approaches taken by the participating teams to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
We present DIY-IPS - Do It Yourself - Indoor Positioning System, an open-source real-time indoor positioning mobile application. DIY-IPS detects users' indoor position by using dual-band RSSI fingerprinting of available WiFi access points. The app can detect users' indoor position in real time without additional infrastructural cost. We release our app as open source to save other researchers the time of recreating it. The app enables researchers/users to (1) collect indoor positioning datasets with ground-truth labels, (2) customize the app for higher accuracy or other research purposes, and (3) test the accuracy of modified methods by live testing against ground truth. We conducted preliminary experiments to demonstrate the effectiveness of the app.
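RSSI fingerprinting of the kind DIY-IPS describes reduces, in its simplest form, to nearest-neighbor matching of a live WiFi scan against labeled fingerprints. The access-point identifiers, bands, and RSSI values below are made up, and this 1-nearest-neighbor sketch is a generic baseline, not the app's actual algorithm:

```python
import math

def locate(scan, fingerprints):
    """Return the label of the stored fingerprint closest to `scan`.

    scan / fingerprint values: {(bssid, band): rssi_dbm}.
    APs missing from a reading are treated as a weak -100 dBm signal.
    """
    def dist(a, b):
        keys = set(a) | set(b)
        return math.sqrt(sum((a.get(k, -100) - b.get(k, -100)) ** 2 for k in keys))
    return min(fingerprints, key=lambda label: dist(scan, fingerprints[label]))

# Dual-band fingerprints collected at two labeled locations (toy values).
fingerprints = {
    "kitchen": {("ap1", "2.4GHz"): -40, ("ap1", "5GHz"): -45, ("ap2", "2.4GHz"): -70},
    "office":  {("ap1", "2.4GHz"): -65, ("ap1", "5GHz"): -72, ("ap2", "2.4GHz"): -42},
}
scan = {("ap1", "2.4GHz"): -43, ("ap1", "5GHz"): -47, ("ap2", "2.4GHz"): -68}
room = locate(scan, fingerprints)
```

Using both the 2.4 GHz and 5 GHz readings of the same access point as separate fingerprint dimensions is what "dual-band" buys: twice as many features per AP to disambiguate nearby locations.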
We pose the problem of human-robot collaborative problem solving as a planning task coupled with natural language communication. Our framework consists of three components - a natural language engine that parses language utterances into a formal representation and vice versa, a concept learner that induces generalized concepts for plans based on limited interactions with the user, and an HTN planner that solves the task based on human interaction. We illustrate the ability of this framework to address the key challenges of collaborative problem solving by demonstrating it on a collaborative building task in a Minecraft-based blocksworld domain. The accompanying demo video is available at https://youtu.be/q1pwe4aahf0.
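The HTN planner component works by recursively decomposing compound tasks into primitive actions via methods. The task and method names below are invented for a blocksworld-flavored illustration; a real HTN planner also handles preconditions, state, and method choice, which this sketch omits:

```python
# Toy HTN decomposition: compound tasks map to lists of subtasks,
# primitives map to None. Names are hypothetical, not from the paper.
METHODS = {
    "build_tower": [["place_block", "place_block", "place_block"]],
    "place_block": None,  # primitive action
}

def htn_plan(task):
    """Recursively decompose a task into a flat list of primitive actions."""
    if METHODS.get(task) is None:
        return [task]
    plan = []
    for subtask in METHODS[task][0]:   # take the first applicable method
        plan.extend(htn_plan(subtask))
    return plan
```

A learned generalized concept (e.g. "tower") would, in this picture, become a new entry in the method library that the planner can decompose on the user's behalf.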
Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without making meaningful use of the latent representation. We introduce inference critics that detect and incentivize against posterior collapse by requiring correspondence between latent variables and observations. By connecting the critic's objective to the literature on self-supervised contrastive representation learning, we show both theoretically and empirically that optimizing inference critics increases the mutual information between observations and latents, mitigating posterior collapse. This approach is straightforward to implement and requires significantly less training time than previous methods, yet obtains competitive results on three established datasets. Overall, the approach lays the groundwork to bridge the previously disconnected frameworks of contrastive learning and probabilistic modeling with variational autoencoders, underscoring the benefits both communities may find at their intersection.
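The contrastive objective the abstract connects to can be illustrated with a generic InfoNCE-style loss over paired (observation, latent) embeddings: matched pairs are positives, mismatched pairs within the batch are negatives. This NumPy sketch shows the mutual-information intuition only; it is not the paper's exact critic:

```python
import numpy as np

def infonce_loss(x_emb, z_emb, temperature=0.1):
    """x_emb, z_emb: (batch, dim) arrays; row i of each is a positive pair."""
    # cosine-similarity logits between every observation and every latent
    x = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    z = z_emb / np.linalg.norm(z_emb, axis=1, keepdims=True)
    logits = (x @ z.T) / temperature
    # cross-entropy with the matched latent as the correct "class"
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# latents that track their observations score a low loss ...
aligned = infonce_loss(x, x + 0.01 * rng.normal(size=(4, 8)))
# ... while collapsed (identical) latents carry no information and score high
collapsed = infonce_loss(x, np.ones((4, 8)))
```

A low InfoNCE loss lower-bounds high mutual information between observations and latents, which is why optimizing such a critic counteracts posterior collapse.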